14 research outputs found

    Development of algorithms with real-time calibration for the recognition of hand postures using sensorized fabric.

    This work concerns the design and experimental evaluation of real-time calibration algorithms for a wearable system, with the specific goal of recognizing a discrete set of hand postures. In particular, the system was developed for the sensorized fabric glove created in the laboratories of the Centro E. Piaggio of the University of Pisa.
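
    The abstract above does not detail the calibration or recognition algorithms themselves. Purely as an illustrative sketch of one common approach for resistive bend sensors - running min/max normalization of each channel followed by nearest-centroid matching against posture templates - the Python snippet below shows the general idea; the channel count, posture set, and numeric values are invented and are not the system developed at the Centro E. Piaggio.

    import numpy as np

    class OnlineCalibrator:
        """Track per-channel running min/max and rescale raw readings to [0, 1]."""
        def __init__(self, n_channels):
            self.lo = np.full(n_channels, np.inf)
            self.hi = np.full(n_channels, -np.inf)

        def update(self, raw):
            raw = np.asarray(raw, dtype=float)
            self.lo = np.minimum(self.lo, raw)   # widen the observed range in real time
            self.hi = np.maximum(self.hi, raw)
            span = np.where(self.hi > self.lo, self.hi - self.lo, 1.0)
            return (raw - self.lo) / span        # calibrated reading in [0, 1]

    def classify(calibrated, templates):
        """Nearest-centroid match against a discrete set of posture templates."""
        names = list(templates)
        dists = [np.linalg.norm(calibrated - templates[name]) for name in names]
        return names[int(np.argmin(dists))]

    # Hypothetical templates for three postures over five bend-sensor channels.
    templates = {"open": np.zeros(5),
                 "fist": np.ones(5),
                 "point": np.array([0.0, 1.0, 1.0, 1.0, 1.0])}
    cal = OnlineCalibrator(n_channels=5)
    cal.update([510, 505, 500, 498, 503])            # relaxed hand: seeds the range
    reading = cal.update([512, 930, 905, 915, 920])  # raw ADC-like values
    print(classify(reading, templates))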

    Toward Blockchain-based Fashion Wearables in the Metaverse: the Case of Decentraland

    Among the earliest projects to combine the Metaverse and non-fungible tokens (NFTs) we find Decentraland, a blockchain-based virtual world that touts itself as the first to be owned by its users. In particular, the platform's virtual wearables (which allow avatar appearance customization) have attracted much attention from users, content creators, and the fashion industry. In this work, we present the first study to quantitatively characterize Decentraland's wearables and their publication, minting, and sales on the platform's marketplace. Our results indicate that wearables are mostly given away to promote and increase engagement with other cryptoasset or Metaverse projects, and only a small fraction is sold on the platform's marketplace, where the price is mainly driven by the wearable's preset rarity. Hence, platforms that offer virtual wearable NFTs should pay particular attention to the economics around this kind of asset beyond its mere sale. Comment: Accepted at the conference IEEE MetaCom 202
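
    The study's actual pipeline and dataset are not reproduced here. As a hedged illustration of the kind of marketplace aggregation the abstract describes, the sketch below groups hypothetical sale records by rarity tier and reports the median price per tier, the sort of summary that would reveal price being driven by preset rarity. All field names and records are invented.

    from statistics import median
    from collections import defaultdict

    # Hypothetical sale records: (wearable_id, rarity_tier, price_in_MANA).
    sales = [
        ("w1", "common", 5.0), ("w2", "common", 7.5),
        ("w3", "epic", 120.0), ("w4", "epic", 95.0),
        ("w5", "mythic", 900.0),
    ]

    by_rarity = defaultdict(list)
    for _, rarity, price in sales:
        by_rarity[rarity].append(price)

    # Median sale price per rarity tier: a first look at whether rarity drives price.
    for rarity, prices in sorted(by_rarity.items()):
        print(f"{rarity:>8}: n={len(prices):2d}  median={median(prices):8.2f}")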

    Http Request Scheduler

    In recent years, the Web has reshaped itself around the concept of offering and requesting web services, creating a large distributed system of which these services are the main building blocks. The purpose of this study is to bring into the world of web services a particular component, an HTTP request scheduling service, which can be instructed through a SOAP or REST interface to query or control other services over HTTP at an established time. The power of Cron and other desktop schedulers can be exploited to offer new kinds of services, leading to new possibilities: saving histories of a newspaper's front pages, performing resource-tracking activities, saving frames taken periodically from a site that manages a webcam, or activating and deactivating other services at a given time. We outlined the architecture of an HTTP Request Scheduler (HRS), implemented a working prototype in Java using the Quartz scheduling framework, and also defined a specific XML language to instruct the component.
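
    The prototype described above is written in Java on top of Quartz, and its XML instruction language is not reproduced here. Purely as an illustration of the scheduling idea, the following Python sketch parses a minimal, invented XML instruction and fires an HTTP GET at the requested time using only the standard library.

    import sched, time, urllib.request
    import xml.etree.ElementTree as ET

    # Minimal, invented instruction format (not the HRS XML language described above);
    # the timestamp is generated a few seconds ahead so the demo fires quickly.
    fire_at = time.strftime("%Y-%m-%dT%H:%M:%S", time.localtime(time.time() + 5))
    instruction = f'<request method="GET" url="https://example.org/" at="{fire_at}"/>'

    def fire(method, url):
        # Perform the scheduled HTTP request and report the status code.
        with urllib.request.urlopen(urllib.request.Request(url, method=method)) as resp:
            print(url, "->", resp.status)

    node = ET.fromstring(instruction)
    when = time.mktime(time.strptime(node.get("at"), "%Y-%m-%dT%H:%M:%S"))

    scheduler = sched.scheduler(time.time, time.sleep)
    scheduler.enterabs(when, 1, fire, argument=(node.get("method"), node.get("url")))
    scheduler.run()   # blocks until the scheduled request has been dispatched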

    Web Language Identification Testing Tool

    Nowadays a variety of tools for automatic language identification are available. Regardless of the approach used, at least two features can be identified as crucial for evaluating the performance of such tools: the precision of the presented results and the range of languages that can be detected. In this work we shall focus on a subtask of written language identification that is important for preserving and enhancing multilinguality on the Web, i.e. detecting the language of a Web page given its URL. More specifically, the final aim is to verify to what extent under-represented languages are recognized by available tools. The main specificity of Web Language Identification (WLI) lies in the fact that an HTML page can often provide useful extralinguistic clues (URL domain name, metadata, encoding, etc.) that can enhance accuracy. We shall first provide some data and statistics on the presence of languages on the web, secondly discuss existing practices and tools for language identification according to different metrics - for instance the approaches used and the number of supported languages - and finally make some proposals on how to improve current Web Language Identifiers. We shall also present a preliminary WLI service that builds on the Google Chromium Compact Language Detector; the WLI tool allows us to test the Google n-gram based algorithm against an ad hoc gold standard of pages in various languages. The gold standard, based on a selection of Wikipedia projects, contains samples in languages for which no automatic recognition has been attempted; it can thus be used by specialists to develop and evaluate WLI systems.
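
    The detector's internals are not given in the abstract. As a rough, illustrative sketch of the n-gram family of approaches it refers to (not the Chromium Compact Language Detector itself), the snippet below builds character-trigram profiles from tiny training samples, scores a text snippet by cosine similarity, and optionally boosts the language hinted at by the URL's country-code domain. The training texts, boost weight, and TLD mapping are all assumptions.

    from collections import Counter
    from math import sqrt
    from urllib.parse import urlparse

    def trigrams(text):
        text = " " + text.lower() + " "
        return Counter(text[i:i + 3] for i in range(len(text) - 2))

    def cosine(a, b):
        num = sum(a[g] * b[g] for g in set(a) & set(b))
        den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
        return num / den if den else 0.0

    # Tiny, invented training samples; a real profile would use far more text.
    profiles = {
        "en": trigrams("the quick brown fox jumps over the lazy dog and runs away"),
        "it": trigrams("la volpe veloce salta sopra il cane pigro e poi scappa via"),
    }
    TLD_HINTS = {"it": "it", "uk": "en"}   # assumed country-code-to-language hints

    def identify(url, snippet, boost=0.05):
        probe = trigrams(snippet)
        scores = {lang: cosine(probe, prof) for lang, prof in profiles.items()}
        tld = urlparse(url).hostname.rsplit(".", 1)[-1]
        hinted = TLD_HINTS.get(tld)
        if hinted in scores:
            scores[hinted] += boost        # extralinguistic clue from the URL
        return max(scores, key=scores.get)

    print(identify("http://esempio.it/pagina", "il cane corre veloce nel parco"))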

    D6.1: Technologies and Tools for Lexical Acquisition

    This report describes the technologies and tools to be used for Lexical Acquisition in PANACEA. It includes descriptions of existing technologies and tools which can be built on and improved within PANACEA, as well as of new technologies and tools to be developed and integrated into the PANACEA platform. The report also specifies the Lexical Resources to be produced. Four main areas of lexical acquisition are included: Subcategorization Frames (SCFs), Selectional Preferences (SPs), Lexical-semantic Classes (LCs), for both nouns and verbs, and Multi-Word Expressions (MWEs).
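
    The report summary above does not specify the acquisition components. As a small, hedged illustration of one of the four areas (multi-word expression candidates), the sketch below ranks bigrams by pointwise mutual information over a toy corpus; the corpus and tokenization are invented, and this is not one of the PANACEA tools.

    import math
    from collections import Counter

    # Toy corpus; a real MWE acquisition run would use a large, POS-tagged corpus.
    corpus = ("the prime minister met the prime minister of spain "
              "while the finance minister signed a trade agreement").split()

    unigrams = Counter(corpus)
    bigrams = Counter(zip(corpus, corpus[1:]))
    n = len(corpus)

    def pmi(w1, w2):
        # Pointwise mutual information of a bigram relative to its parts.
        p_xy = bigrams[(w1, w2)] / (n - 1)
        p_x, p_y = unigrams[w1] / n, unigrams[w2] / n
        return math.log2(p_xy / (p_x * p_y))

    candidates = sorted(bigrams, key=lambda b: pmi(*b), reverse=True)
    for w1, w2 in candidates[:5]:
        print(f"{w1} {w2}\tPMI={pmi(w1, w2):.2f}")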

    Data Visualization for Natural Language Representation and Processing

    Every day, our society produces an enormous number of texts in hundreds of different languages. In addition to this wealth of data, resources for the management, representation and processing of natural language are developed, with the objective of comprehending and handling the complex communication system that written language is. Graphical visualization may prove to be a way to make language-related datasets available not only to experts in the fields of linguistics and internationalization, but also to non-experts and enthusiasts who want to know their structure and the data they contain. In this work we present a series of design studies for the visualization of language resources, based on the theories and best practices developed in the field of information visualization.

    The Use of Blockchain for Digital Archives: a comparison between Ethereum and Hyperledger (AIUCD 2019)

    In recent years, blockchain technology has been spreading on an increasingly large scale across several research areas, including Cultural Heritage. There are different types of blockchain, which can be classified both by the type of users who can access them and by the functionality they offer. This article describes a theoretical study comparing two very different blockchains, Ethereum and Hyperledger, in order to determine which of the two is better suited for storing tangible cultural heritage items held in digital archives. After a brief description of the two technologies, a fairly generic application scenario is described in order to understand which of the two better satisfies the requirements. The two blockchains are then compared on the basis of general issues, architectural requirements, and various other considerations. As a result of the comparison, Hyperledger Fabric emerges as the more suitable choice in the context of digital archives.
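
    The comparison above is theoretical and includes no code. Purely to illustrate the general pattern both platforms would rely on for tangible items (anchoring a fingerprint of the archival record on chain rather than the object itself), the sketch below computes a content hash that an Ethereum smart contract or Hyperledger Fabric chaincode could store; the record fields are invented.

    import hashlib, json

    # Invented metadata record for a tangible item held in a digital archive.
    record = {
        "inventory_id": "ARCH-00042",
        "title": "Astrolabe, 16th century",
        "scan_uri": "https://archive.example.org/objects/ARCH-00042/scan.tiff",
    }

    # Canonical serialization + SHA-256: the fingerprint a blockchain transaction
    # would anchor on chain, regardless of the platform chosen.
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":")).encode("utf-8")
    print(hashlib.sha256(canonical).hexdigest())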

    Text Encoder and Annotator: an all-in-one editor for transcribing and annotating manuscripts with RDF

    In the context of the digitization of manuscripts, transcription and annotation are often distinct, sequential steps. This could lead to difficulties in improving the transcribed text when annotations have already been defined. In order to avoid this, we devised an approach which merges the two steps into the same process. Text Encoder and Annotator (TEA) is a prototype application embracing this concept. TEA is based on a lightweight language syntax which annotates text using Semantic Web technologies. Our approach is currently being developed within the Clavius on the Web project, devoted to studying the manuscripts of Christophorus Clavius, an influential 16th century mathematician and astronomer.
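
    TEA's actual lightweight syntax is not shown in the abstract. As an invented stand-in, the sketch below parses a toy inline markup of the form [span](key=value) from a transcription and emits RDF triples with rdflib; the syntax, namespace, and property names are assumptions made for illustration only.

    import re
    from rdflib import Graph, Literal, Namespace

    EX = Namespace("http://example.org/annotation/")   # assumed namespace

    # Toy transcription with an invented inline annotation syntax: [span](key=value).
    transcription = ("Letter from [Christophorus Clavius](role=author) "
                     "about the [calendar](topic=reform).")

    graph = Graph()
    graph.bind("ex", EX)

    for i, match in enumerate(re.finditer(r"\[([^\]]+)\]\((\w+)=(\w+)\)", transcription)):
        span, key, value = match.groups()
        node = EX[f"span{i}"]
        graph.add((node, EX.text, Literal(span)))     # the annotated text span
        graph.add((node, EX[key], Literal(value)))    # the user-supplied property

    print(graph.serialize(format="turtle"))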

    Linked Data Maps: Providing a Visual Entry Point for the Exploration of Datasets

    Linked Data sets are an ever-growing, invaluable source of information and knowledge. However, the wide adoption of this large amount of interlinked structured data is still held back by some non-trivial obstacles. The one we tackle in this article is the difficulty users have in getting started with their work on Linked Data sources. In fact, querying, and in general dealing with such datasets, requires a deep knowledge about their specific classes, instances and properties. We believe that an entry point that eases the access to such information would significantly reduce the barriers around this technology and foster its promotion. Linked Data Maps is a method for representing RDF graphs as interactive, map-like visualizations, based on our previous work focused on the visual exploration of DBpedia. The approach is extended to deal with a wider range of Linked Data sets, and tested with a user evaluation study on two distinct RDF graphs.
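
    The article's map-rendering pipeline is not reproduced here. As a hedged sketch of the first step such a visual entry point needs (measuring how populated each class is, so that map regions can be sized accordingly), the snippet below queries the public DBpedia SPARQL endpoint for instance counts per class using SPARQLWrapper; the query shape and result handling are illustrative assumptions.

    from SPARQLWrapper import SPARQLWrapper, JSON

    # Public DBpedia endpoint, the dataset the original work explored visually.
    sparql = SPARQLWrapper("https://dbpedia.org/sparql")
    sparql.setReturnFormat(JSON)
    sparql.setQuery("""
        SELECT ?class (COUNT(?s) AS ?instances)
        WHERE {
          ?s a ?class .
          FILTER(STRSTARTS(STR(?class), "http://dbpedia.org/ontology/"))
        }
        GROUP BY ?class
        ORDER BY DESC(?instances)
        LIMIT 10
    """)

    # Each class and its instance count could drive the area of a "region" on the map.
    results = sparql.query().convert()
    for row in results["results"]["bindings"]:
        print(row["class"]["value"], row["instances"]["value"])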

    Towards an Automated Analysis of the Online Supply Chain of Novel Psychoactive Substances

    Novel Psychoactive Substances (NPSs), also known as legal highs or smart drugs, are legal alternatives to illegal drugs. Many drug consumers are attracted by the opportunity of buying substances without any legal consequences. Online shops, virtual marketplaces and other trade channels thrive in this legal grey area. The health risks connected to this phenomenon are high: every year hundreds of people present symptoms deriving from the use or abuse of these unknown chemicals, and health professionals may struggle to provide the appropriate treatments. The EU is taking some countermeasures, forbidding the sale of an NPS as soon as its health risk is established, but new natural or synthetic substances are continuously discovered or manufactured, and it is difficult for legislators to keep up. To cope with the lack of regulation in this market sector, the EU is funding the CASSANDRA project to study and comprehend the NPS lifecycle and supply chain through the automatic analysis of user-generated content (forums and social media) and online markets. During the first year of activity, we combined data gathering, analysis, and visualization techniques to i) provide an insight into two large forums, Bluelight and Drugs-forum, which have hosted discussions about drugs for more than a decade; ii) investigate how substances sold by online shops of the NPS supply chain map onto the forums; and iii) investigate how social networks like Facebook and Twitter are used to advertise and discuss drug consumption. In order to gather as much data as possible from the forums, we developed an ad-hoc web scraper. The system keeps track of the forum hierarchy and structure, preserving all tags and other metadata associated with posts, threads and forums. All the content is anonymized and stored in a relational database with an associated text index. We obtained a snapshot of Drugs-forum and Bluelight, whose content spans from 2003 and 1999 respectively to today, with more than 1 million and more than 3 million posts, and about 200 thousand and 350 thousand users respectively. A selected set of 10 online NPS shops underwent a similar scraping and storage phase, while we crawled the social media pages connected to those shops through the provided APIs. We extracted a list of the products advertised in those shops and pages, finding more than 250. The forums have been the starting point and the core of the analyses so far. We developed some interfaces to investigate their structure, the number of posts per thread and the number of posts per user (which, as expected, follow a power-law distribution). Moreover, we analysed the textual content of the posts, showing the number of occurrences of terms over time (Figure 1), the sections in which a series of known NPSs are first mentioned, and the terms co-occurring with other terms. In particular, this last analysis is leading to an automated system to show the most frequent symptoms mentioned together with the name of a substance. We also analysed the hyperlinks appearing in the forums, comparing them with a comprehensive list of online NPS shops and related social media accounts and finding that they do not quite overlap, and checking which NPSs sold in those shops are mentioned in the forums, finding that almost every substance is mentioned. In the future we plan to extend the analysis to dark web marketplaces. Future work will also involve the development of an automatic system to detect the mention of unknown substances, in order to monitor the discussion about them from the start, to understand where substances are first mentioned and sold, and how the supply chain evolves.
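
    As a hedged illustration of the co-occurrence analysis mentioned above (surfacing symptom terms that appear in the same posts as a substance name), the sketch below counts, within a toy list of posts, which terms from an assumed symptom lexicon co-occur with a given substance. The posts, lexicon, and substance name are invented and do not come from the project's data.

    import re
    from collections import Counter

    # Invented example posts; the real analysis runs over millions of anonymized posts.
    posts = [
        "tried mephedrone last night, strong nausea and anxiety afterwards",
        "mephedrone gave me insomnia and a racing heart",
        "no issues with this batch, slight nausea only",
    ]
    SYMPTOMS = {"nausea", "anxiety", "insomnia", "paranoia"}   # assumed symptom lexicon

    def cooccurring_symptoms(substance, posts):
        # Count symptom terms appearing in the same post as the substance name.
        counts = Counter()
        for post in posts:
            tokens = set(re.findall(r"[a-z]+", post.lower()))
            if substance in tokens:
                counts.update(tokens & SYMPTOMS)
        return counts

    for symptom, freq in cooccurring_symptoms("mephedrone", posts).most_common():
        print(f"{symptom}: {freq}")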